
    NBTI and leakage aware sleep transistor design for reliable and energy efficient power gating

    In this paper we show that power gating techniques become more effective over their lifetime, since the aging of sleep transistors (STs) due to negative bias temperature instability (NBTI) drastically reduces leakage power. Based on this property, we propose an NBTI and leakage aware ST design method for reliable and energy efficient power gating. Through SPICE simulations, we show lifetime extension up to 19.9x and average leakage power reduction up to 14.4% compared to a standard ST design approach, without additional area overhead. Finally, when a maximum 10-year lifetime target is considered, we show that the proposed method allows multiple beneficial options compared to a standard ST design method: either to improve circuit operating frequency by up to 9.53% or to reduce ST area overhead by up to 18.4%.
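
    The effect exploited above can be illustrated with the textbook sub-threshold current relation: leakage falls exponentially as NBTI shifts the sleep transistor's threshold voltage upward. Below is a hedged back-of-the-envelope sketch of this relation; the parameters (I0, the slope factor, the fresh Vth, and the shift values) are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the effect the paper exploits: sub-threshold leakage
# falls exponentially with the NBTI-induced threshold-voltage shift of the
# sleep transistor. Uses the textbook model I_sub ~ I0 * exp(-Vth/(n*vT));
# I0, n, Vth, and the shift values are illustrative assumptions.
import math

V_THERMAL = 0.0259   # thermal voltage at 300 K [V]
N_SLOPE = 1.5        # sub-threshold slope factor (assumed)
I0 = 1e-6            # leakage extrapolated to Vth = 0 (assumed) [A]
VTH_FRESH = 0.30     # fresh threshold voltage (assumed) [V]

def leakage(vth):
    """Sub-threshold leakage for a given threshold voltage."""
    return I0 * math.exp(-vth / (N_SLOPE * V_THERMAL))

fresh = leakage(VTH_FRESH)
for shift_mv in (10, 30, 50):   # plausible NBTI shifts over the lifetime
    aged = leakage(VTH_FRESH + shift_mv / 1000.0)
    print(f"dVth = {shift_mv:2d} mV -> leakage x{aged / fresh:.2f}")
```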

    BTI and leakage aware dynamic voltage scaling for reliable low power cache memories

    We propose a novel dynamic voltage scaling (DVS) approach for reliable and energy efficient cache memories. First, we demonstrate that, as memories age, leakage power reduction techniques become more effective due to sub-threshold current reduction with aging. Then, we provide an analytical model and a design exploration framework to evaluate trade-offs between leakage power and reliability, and propose a BTI and leakage aware selection of the “drowsy” state retention voltage for DVS of cache memories. We propose three DVS policies that achieve different power/reliability trade-offs. Through SPICE simulations, we show that critical charge and static noise margin increases of up to 150% and 34.7%, respectively, are achieved compared to a standard aging-unaware drowsy technique, with a limited leakage power increase during the very early lifetime and leakage energy savings of up to 37% over 10 years of operation. These improvements are attained at zero or negligible area cost.

    High quality testing of grid style power gating

    This paper shows that existing delay-based testing techniques for power gating exhibit fault coverage loss due to unconsidered delays introduced by the structure of the virtual voltage power-distribution-network (VPDN). To restore this loss, which can reach up to 70.3% on stuck-open faults, we propose a design-for-testability (DFT) logic that considers the impact of the VPDN on fault coverage and constitutes the proper interface between the VPDN and the DFT. The proposed logic can be easily implemented on top of existing DFT solutions, and its overhead is optimized by an algorithm that offers trade-off flexibility between test-application-time and hardware overhead. Through physical layout SPICE simulations, we show complete fault coverage recovery on stuck-open faults and a 43.2% test-application-time improvement compared to a previously proposed DFT technique. To the best of our knowledge, this paper presents the first analysis of the VPDN impact on test quality.

    BTI aware thermal management for reliable DVFS designs

    In this paper, we show that dynamic voltage and frequency scaling (DVFS) designs, together with stress-induced BTI variability, exhibit high temperature-induced BTI variability, depending on their workload and operating modes. We show that the impact of temperature-induced variability on circuit lifetime can be higher than that of stress-induced variability, deviating by more than 50% from the value estimated at the circuit's average temperature. In order to account for these variabilities in lifetime estimation at design time, we propose a simulation framework for the BTI degradation analysis of DVFS designs that accounts for workload and actual temperature profiles. A profile is generated considering statistically probable workloads and thermal management constraints by means of the HotSpot tool. Using the proposed framework, we explore the expected lifetime of the Ethernet circuit from the IWLS05 benchmark suite, synthesized with a 32 nm CMOS technology library, for various thermal management constraints. We show that margin-based design can underestimate or overestimate the lifetime of DVFS designs by up to 67.8% and 61.9%, respectively. The proposed framework therefore allows designers to select dynamic thermal management constraints that trade off long-term reliability (lifetime) and performance with up to 35.8% and 26.3% higher accuracy, respectively, than a temperature-variability-unaware BTI analysis.
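
    The sensitivity of lifetime estimates to temperature variability can be reproduced with a generic BTI model. The sketch below is not the paper's framework: it steps a common empirical power law, dVth(t) = A0·exp(-Ea/kT)·t^n, over a duty-cycled temperature profile using equivalent-time accumulation, with A0, Ea, n, the failure threshold, and the profile all assumed for illustration.

```python
# Hedged sketch (not the paper's framework): BTI lifetime computed from a
# temperature profile vs. from its average, using the empirical power law
# dVth(t) = A0 * exp(-Ea / (k * T)) * t^n stepped with equivalent-time
# accumulation. All parameter values are illustrative assumptions.
import math

K_B = 8.617e-5                 # Boltzmann constant [eV/K]
A0, EA, N = 0.1, 0.10, 0.2     # assumed model parameters
DVTH_FAIL = 0.05               # Vth shift taken as end of life [V]

def prefactor(temp_k):
    return A0 * math.exp(-EA / (K_B * temp_k))

def lifetime_years(profile, dt_hours=1.0):
    """Step dVth over a repeating hourly temperature profile [K]."""
    dvth, hours = 0.0, 0.0
    while dvth < DVTH_FAIL:
        for temp in profile:
            a = prefactor(temp)
            t_eq = (dvth / a) ** (1.0 / N)   # equivalent stress time
            dvth = a * (t_eq + dt_hours) ** N
            hours += dt_hours
            if dvth >= DVTH_FAIL:
                break
    return hours / (24 * 365)

hot_cold = [390.0] * 12 + [330.0] * 12       # duty-cycled day profile [K]
flat = [sum(hot_cold) / len(hot_cold)] * 24  # same average temperature
print(f"profile-aware: {lifetime_years(hot_cold):.1f} y, "
      f"average-T: {lifetime_years(flat):.1f} y")
```

    With these illustrative numbers the two estimates differ by roughly a factor of two, which is the kind of gap the paper's profile-aware analysis is designed to expose.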

    Susceptible workload driven selective fault tolerance using a probabilistic fault model

    In this paper, we present a novel fault tolerance design technique, applicable at the register transfer level, that protects the functionality of logic circuits using a probabilistic fault model. The proposed technique selects the most susceptible workload of combinational circuits and protects it against probabilistic faults. Workload susceptibility is ranked as the likelihood of any fault to bypass the inherent logical masking of the circuit and propagate an erroneous response to its outputs when that workload is executed. The workload is protected through a Triple Modular Redundancy (TMR) scheme applied to the patterns evaluated as most susceptible. We apply the proposed technique to the LGSynth91 and ISCAS85 benchmarks and evaluate its fault tolerance capabilities against errors induced by permanent faults and soft errors. We show that the proposed technique, when applied to protect only the 32 most susceptible patterns, achieves, on average across the examined benchmarks, error coverage improvements of 98% and 94% against errors induced by single stuck-at faults (permanent faults) and soft errors (transient faults), respectively, compared to a reduced TMR scheme that protects the same number of patterns without ranking them.
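
    A minimal sketch of the ranking step described above, under toy assumptions (a tiny NAND netlist, an exhaustive workload, and single stuck-at faults standing in for the probabilistic fault population); it is not the authors' tool, but it shows how patterns can be ordered by the fraction of faults that bypass logical masking.

```python
# Minimal sketch (not the authors' tool) of susceptibility ranking: for each
# input pattern, the fraction of single stuck-at faults whose effect escapes
# logical masking and reaches a primary output. Netlist and workload are toy
# examples.
import itertools

GATES = [            # (output net, input nets), in topological order
    ("n1", ("a", "b")),
    ("n2", ("c", "d")),
    ("o1", ("n1", "n2")),
    ("o2", ("n2", "e")),
]
INPUTS = ("a", "b", "c", "d", "e")
OUTPUTS = ("o1", "o2")
FAULTS = [(g, v) for g, _ in GATES for v in (0, 1)]  # stuck-at-0/1 sites

def simulate(pattern, fault=None):
    nets = dict(zip(INPUTS, pattern))
    for net, (x, y) in GATES:
        nets[net] = 1 - (nets[x] & nets[y])      # 2-input NAND
        if fault is not None and fault[0] == net:
            nets[net] = fault[1]                 # inject stuck-at value
    return tuple(nets[o] for o in OUTPUTS)

def susceptibility(pattern):
    """Likelihood that a fault propagates to an output under this pattern."""
    golden = simulate(pattern)
    return sum(simulate(pattern, f) != golden for f in FAULTS) / len(FAULTS)

workload = list(itertools.product((0, 1), repeat=len(INPUTS)))
ranked = sorted(workload, key=susceptibility, reverse=True)
print("most susceptible patterns:", ranked[:4])   # candidates for protection
```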

    Susceptible Workload Evaluation and Protection using Selective Fault Tolerance

    Low power fault tolerance design techniques trade reliability to reduce the area cost and the power overhead of integrated circuits by protecting only a subset of their workload or their most vulnerable parts. However, in the presence of faults, not all workloads are equally susceptible to errors. In this paper, we present a low power fault tolerance design technique that selects and protects the most susceptible workload. We propose to rank workload susceptibility as the likelihood of any error to bypass the logic masking of the circuit and propagate to its outputs. The susceptible workload is protected by a partial Triple Modular Redundancy (TMR) scheme. We evaluate the proposed technique on timing-independent and timing-dependent errors induced by permanent and transient faults. In comparison with an unranked selective fault tolerance approach, we demonstrate (a) similar error coverage with a 39.7% average reduction of the area overhead, or (b) an 86.9% average error coverage improvement for a similar area overhead. For the same area overhead, we observe error coverage improvements of 53.1% and 53.5% against permanent stuck-at and transition faults, respectively, and average error coverage improvements of 151.8% and 89.0% against timing-dependent and timing-independent transient faults, respectively. Compared to full TMR, the proposed technique achieves an area and power overhead reduction of 145.8% to 182.0%.
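
    The protection side of the scheme can be sketched as a partial-TMR wrapper, with all names assumed for illustration rather than taken from the paper:

```python
# Minimal sketch of a partial-TMR evaluation path; names are assumed, not
# the paper's implementation. Only patterns in the protected set are run on
# three replicas with a bitwise majority vote. In hardware the replicas are
# independent physical copies, so a fault in one copy is outvoted by the
# other two.
def majority(a, b, c):
    """Bitwise 2-out-of-3 vote over integer output words."""
    return (a & b) | (b & c) | (a & c)

def partial_tmr(replicas, protected, pattern):
    """replicas: three implementations of the same combinational function."""
    if pattern in protected:
        r0, r1, r2 = (f(pattern) for f in replicas)
        return majority(r0, r1, r2)
    return replicas[0](pattern)   # unprotected path: no TMR overhead
```

    Here the protected set would hold the highest-ranked patterns produced by a susceptibility analysis such as the sketch shown with the previous entry.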

    The impact of transistor aging on the reliability of level shifters in nano-scale CMOS technology

    On-chip level shifters are the interface between parts of an Integrated Circuit (IC) that operate at different voltage levels. For this reason, they are indispensable blocks in Multi-Vdd Systems-on-Chips (SoCs). In this paper, we present a comprehensive analysis of the effects of Bias Temperature Instability (BTI) aging on the delay and the power consumption of level shifters. We evaluate the standard High-to-Low/Low-to-High level shifters, as well as several recently proposed level-shifter designs, implemented in a 32 nm CMOS technology. Through SPICE simulations, we demonstrate that the delay degradation due to BTI aging varies for each level shifter design: it is 83.3% on average and exceeds 200% after 5 years of operation for the standard Low-to-High and the NDLSs level shifters, which is 10× higher than the BTI-induced delay degradation of standard CMOS logic cells. Similarly, we show that the examined designs can suffer from an average 38.2% additional power consumption after 5 years of operation, which reaches 180% for the standard level shifter and exceeds 163% for the NDLSs design. The high susceptibility of these designs to BTI is attributed to their differential signaling structure, combined with the very low supply voltage. Moreover, we show that recently proposed level-up shifter designs employing a voltage step-down technique are

    Low power test-compression for high test-quality and low test-data volume

    Test data decompressors targeting low power scan testing introduce a significant amount of correlation in the test data, and thus they tend to adversely affect the coverage of unmodeled defects. In addition, low power decompression needs additional control data, which increases the overall volume of test data to be encoded and inevitably increases the volume of compressed test data. In this paper we show that both of these deficiencies can be efficiently tackled by a novel pseudorandom scheme and a novel encoding method. The proposed scheme can be combined with existing low power decompressors to increase unmodeled defect coverage and almost totally eliminate control data. Extensive experiments using ISCAS and IWLS benchmark circuits show the effectiveness of the proposed method when it is combined with state-of-the-art decompressors.

    State skip LFSRs: bridging the gap between test data compression and test set embedding for IP cores

    We present a new type of linear feedback shift register, the State Skip LFSR. State Skip LFSRs are normal LFSRs augmented with a small linear circuit, the State Skip circuit, which can be used instead of the characteristic-polynomial feedback structure for advancing the state of the LFSR. In that mode, the LFSR performs successive jumps of constant length in its state sequence, since the State Skip circuit omits a predetermined number of states by directly calculating the state that follows them. State Skip LFSRs retain the well-known high compression efficiency of test set embedding while substantially reducing test sequence lengths, since the useless parts of the test sequences are dramatically shortened by traversing them in state skip mode; the lengths of the shortened test sequences approach those of test data compression methods. A systematic method for minimizing the test sequences of reseeding-based test set embedding methods and a low overhead decompression architecture are also presented.
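
    Since an LFSR's next-state function is linear over GF(2), advancing k states is itself a linear map: the k-th power of the one-step transition matrix, which is what the State Skip circuit realizes as a small XOR network. The sketch below demonstrates this equivalence with an illustrative 8-bit polynomial and skip length, neither taken from the paper.

```python
# Sketch of the state-skip idea: jumping SKIP states ahead of an LFSR equals
# one application of the SKIP-th power of its one-step GF(2) transition
# matrix -- the role the State Skip circuit plays in hardware. The 8-bit
# polynomial and skip length are illustrative choices.
NBITS = 8
TAPS = (8, 6, 5, 4)   # x^8 + x^6 + x^5 + x^4 + 1, a maximal-length choice

def step(state):
    """One clock of a Fibonacci LFSR; state is an NBITS-bit integer."""
    fb = 0
    for t in TAPS:
        fb ^= (state >> (t - 1)) & 1
    return ((state << 1) | fb) & ((1 << NBITS) - 1)

def mat_from(f):
    """GF(2) matrix of a linear map f, stored as column bit masks."""
    return [f(1 << j) for j in range(NBITS)]

def mat_vec(m, v):
    out = 0
    for j in range(NBITS):
        if (v >> j) & 1:
            out ^= m[j]   # XOR the columns selected by v's set bits
    return out

def mat_mul(a, b):
    return [mat_vec(a, col) for col in b]

def mat_pow(m, k):
    """Square-and-multiply: the matrix that advances the LFSR by k states."""
    r = [1 << j for j in range(NBITS)]   # identity
    while k:
        if k & 1:
            r = mat_mul(m, r)
        m = mat_mul(m, m)
        k >>= 1
    return r

SKIP = 1000
M_SKIP = mat_pow(mat_from(step), SKIP)   # the "State Skip" linear circuit

state = 0xA5
for _ in range(SKIP):                    # SKIP ordinary clocks...
    state = step(state)
assert mat_vec(M_SKIP, 0xA5) == state    # ...equal one state-skip jump
```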

Embedded testing architectures

    The shrinking of transistor size in Very Deep Sub-Micron (VDSM) technologies has enabled the manufacturing of Multi-core Systems-on-Chips (MCSoCs) that contain billions of transistors. The exponential decrease in transistor manufacturing cost is the main contributor to the widespread use of electronic systems. However, due to manufacturing process imperfections, which cause manufacturing defects, electronic devices need to be tested for compliance with their specifications before they are shipped to customers. Testing of such complex systems becomes increasingly difficult and costly, while the overall cost (manufacturing plus test) must remain below certain bounds.
    Manufacturing testing of Integrated Circuits (ICs) is conducted by expensive, specialized Automatic Test Equipment (ATE) with limited resources, such as communication channels, memory, and channel bandwidth. Test cost depends on the time a chip spends on the ATE as well as on the utilization of these resources. An efficient testing method should be fast and accurate and must occupy the minimum number of ATE resources. At the same time, outdated ATEs remain in common use because upgrading this equipment is prohibitively expensive. To enable the testing of contemporary dense devices on outdated ATEs, Test Resource Partitioning (TRP) techniques emerged: according to TRP, testing architectures are embedded on chip and operate in synergy with the ATE in order to decrease test cost.
    Test cost is also affected by both transistor shrinking and the high integration of transistors in MCSoCs. As transistors shrink, the number of tests required to achieve high defect coverage and assure quality goals grows, because ICs become increasingly sensitive to physical phenomena that introduce new potential manufacturing defects. Moreover, the high integration of transistors in MCSoCs imposes power consumption constraints during testing: a circuit consumes considerably more power in test mode than in normal operation, and violating these constraints can cause apparent failures that would not occur in the field. As a result, new low-power testing techniques are required that handle all test-cost-related objectives in a unified manner.
    In this dissertation, methods are presented that target multiple test cost objectives. The proposed methods have been validated by applying them to academic benchmark circuits.